## Llama 3 8B Instruct Gradient 4194k
An extended long-context model based on Meta-Llama-3-8B-Instruct, reaching a 4194k-token (≈4.19M-token) context length through RoPE theta parameter adjustment; a brief sketch of the technique follows this entry.
Large Language Model · Transformers · English
Author: gradientai · Downloads: 244 · Likes: 70

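Both Gradient entries on this page credit the longer context window to a raised RoPE theta (the rotary position embedding base frequency). The sketch below is a rough, non-authoritative illustration of how that surfaces when inspecting the model with the Hugging Face Transformers library; the Hub model ID and the comments about default values are assumptions, not details taken from this listing.

```python
# Minimal inspection sketch, assuming the Hugging Face Transformers library and
# the Hub ID below (an assumption; check the actual repository name).
from transformers import AutoConfig, AutoModelForCausalLM

model_id = "gradientai/Llama-3-8B-Instruct-Gradient-4194k"  # assumed Hub ID

# Context extension via RoPE theta: the extended checkpoint ships a config whose
# rope_theta is raised well above the stock Llama 3 base, so rotary frequencies
# stay distinguishable over a much longer sequence.
config = AutoConfig.from_pretrained(model_id)
print(config.rope_theta)                # RoPE base frequency after extension
print(config.max_position_embeddings)   # advertised context length in tokens

# Loading the weights with that config; actually filling the window requires
# enough memory for the KV cache (offloading or multiple GPUs in practice).
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",
    device_map="auto",
)
```
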
## Llama 3 8B Instruct Gradient 1048k
An extended version of Llama 3 8B developed by Gradient for long-context processing; it supports context lengths exceeding 1 million tokens through optimized RoPE theta parameters for efficient long-text handling. A loading and generation example follows this entry.
Large Language Model · Transformers · English
Author: gradientai · Downloads: 5,272 · Likes: 682

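As a usage illustration for the 1048k entry, here is a minimal generation sketch with Transformers. The Hub ID and the `report.txt` input file are assumptions made for the example; prompts that actually approach a million tokens need far more KV-cache memory than a single consumer GPU provides.

```python
# Minimal usage sketch, assuming the Hub ID below; "report.txt" is a
# hypothetical long input document used only for illustration.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "gradientai/Llama-3-8B-Instruct-Gradient-1048k"  # assumed Hub ID

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype="auto", device_map="auto"
)

# Wrap the long document in the Llama 3 instruct chat template.
with open("report.txt") as f:
    document = f.read()
messages = [{"role": "user", "content": "Summarize the following report:\n" + document}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# Generate a short summary; the long context window only matters when the
# prompt itself approaches it.
output = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```
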
## Llama 3 8B Instruct 262k
A long-context model extended from Meta-Llama-3-8B-Instruct, supporting a 262k-token context length.
Large Language Model · Transformers · English
Author: gradientai · Downloads: 27.90k · Likes: 258